A Computational Model of Commonsense Moral Decision Making

Authors

  • Richard Kim
  • Max Kleiman-Weiner
  • Andrés Abeliuk
  • Edmond Awad
  • Sohan Dsouza
  • Joshua B. Tenenbaum
  • Iyad Rahwan
Abstract

We introduce a new computational model of moral decision making, drawing on a recent theory of commonsense moral learning via social dynamics. Our model describes moral dilemmas as a utility function that computes trade-offs in values over abstract moral dimensions, which provide interpretable parameter values when implemented in machine-led ethical decision-making. Moreover, by characterizing the social structure of individuals and groups as a hierarchical Bayesian model, we show that a useful description of an individual's moral values, as well as a group's shared values, can be inferred from a limited amount of observed data. Finally, we apply and evaluate our approach on data from the Moral Machine, a web application that collects human judgments on moral dilemmas involving autonomous vehicles.

Recent advances in machine learning, notably Deep Learning, have demonstrated impressive results in various domains of human intelligence, such as computer vision (Szegedy et al.), machine translation (Wu et al., 2016), and speech generation (Oord et al., 2016). In domains as abstract as human emotion, Deep Learning has shown a proficient capacity to detect human emotions in natural language text (Felbo et al., 2017). These achievements indicate that Deep Learning could pave the way for AI in ethical decision making. However, training Deep Learning models often requires human-labeled data numbering in the millions, and despite recent advances that enable a model to be trained from a small number of examples (Vinyals et al., 2016; Santoro et al., 2016), this constraint remains a key challenge in Deep Learning. In addition, Deep Learning models have been criticized as "black-box" algorithms that defy attempts at interpretation (Lei, Barzilay, and Jaakkola, 2016). The viability of many Deep Learning algorithms for real-world applications in business and government has come into question, as recent legislation in the EU, slated to take effect in 2018, will ban automated decisions, including those derived from machine learning, if they cause an "adverse legal effect" on the persons concerned (Goodman and Flaxman, 2016).

In contrast to Deep Learning algorithms, evidence from studies in human cognition suggests that humans are able to learn and make predictions from a much smaller number of noisy and sparse examples (Tenenbaum et al., 2011). Moreover, studies have shown that humans are able to internally rationalize their moral decisions and articulate reasons for them (Haidt, 2001). Given this stark difference between the current state of machine learning and human cognition, how can we draw on the latest theories in cognitive science to design AI with the capacity to learn moral values from limited interactions with humans and make decisions through explicable processes?

A recent theory from cognitive science postulates that humans learn to make ethical decisions by acquiring abstract moral principles through observation and interaction with other humans in their environment (Kleiman-Weiner, Saxe, and Tenenbaum, 2017). This theory characterizes an ethical decision as a utility-maximizing choice over a set of outcomes whose values are computed from the weights people place on abstract moral concepts such as "kin" or "reciprocal relationship." In addition, given the dynamics of individuals and their memberships in groups, the framework explains how an individual's moral preferences, and the actions resulting from them, lead to the development of the group's shared moral principles (i.e., group norms).
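To make the utility-based formulation concrete, the following is a minimal sketch of a weighted trade-off over abstract moral dimensions combined with a soft-max choice rule. The dimension names, the outcome encoding, and the noise parameter beta are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

# Illustrative abstract moral dimensions (hypothetical names, not the paper's feature set).
DIMENSIONS = ["humans", "pets", "passengers", "lawfulness"]

def utility(outcome, weights):
    """Utility of one outcome: a weighted sum over abstract moral dimensions.

    outcome -- dict mapping a dimension to the quantity spared/affected by this choice
    weights -- dict of per-dimension weights ("moral principles") for one individual
    """
    return sum(weights[d] * outcome.get(d, 0.0) for d in DIMENSIONS)

def prob_choose_a(outcome_a, outcome_b, weights, beta=1.0):
    """Soft-max (Luce-choice) probability of picking side A of a binary dilemma.

    beta controls decision noise: larger beta approaches strict utility maximization.
    """
    u_a = utility(outcome_a, weights)
    u_b = utility(outcome_b, weights)
    return float(np.exp(beta * u_a) / (np.exp(beta * u_a) + np.exp(beta * u_b)))

# Example individual: weighs human lives heavily, pets lightly.
weights = {"humans": 1.0, "pets": 0.2, "passengers": 0.5, "lawfulness": 0.3}
side_a = {"humans": 2, "lawfulness": 1}   # e.g., spare two law-abiding pedestrians
side_b = {"humans": 1, "pets": 1}         # e.g., spare one jaywalker and a dog
print(prob_choose_a(side_a, side_b, weights))
```

Under this sketch, an individual's moral principles are simply the per-dimension weights, so different weight vectors resolve the same dilemma differently.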
In this work we extend the framework introduced by Kleiman-Weiner, Saxe, and Tenenbaum (2017) to explore a computational model of the human mind in moral dilemmas with binary decisions. We characterize decision making in moral dilemmas as a utility function that computes the trade-offs of values perceived by humans in the choices of the dilemma. These values are the weights that humans put on abstract dimensions of the dilemma; we call these weights moral principles. Furthermore, we represent an individual agent as a member of a group with many other agents that share similar moral principles; these shared moral principles of the group, taken in aggregate, give rise to the group norm. Exploiting the hierarchical structure of individuals and groups, we show how hierarchical Bayesian inference (Gelman et al., 2013) can provide a powerful mechanism to rapidly infer individual moral principles as well as the group norm from sparse and noisy data (a minimal sketch of this inference follows the contribution list below).

We apply our model to the domain of autonomous vehicles (AV) through a data set from the Moral Machine (http://moralmachine.mit.edu/), a web application that collects human judgments in ethical dilemmas involving AV. A recent study on public sentiment toward AV reveals that endowing AI with human moral values is an important step before AV can undergo widespread market adoption (Bonnefon, Shariff, and Rahwan, 2016). In light of this study, we view the application of our model to understanding how the human mind perceives and resolves moral dilemmas on the road as an important step towards building an AV with human moral values.

This paper makes the following distinct contributions towards building an ethical AI:

  • Introducing a novel computational model of moral decision making that characterizes a moral dilemma as a trade-off of values along abstract moral dimensions. We show that this model describes well how the human mind processes moral dilemmas and provides an interpretable process for an AI agent to arrive at a decision in a moral dilemma.
  • Characterizing the social structure of individuals and groups as a hierarchical Bayesian model, we show that the model can rapidly infer the moral principles of individuals from a limited number of observations. Rapidly inferring other agents' unique moral values will be crucial as AI agents interact with other agents, including humans.
  • Demonstrating the model's capacity to rapidly infer a group's norms, characterized as a prior over individual moral preferences. Inferring the shared moral values of a group is an important step towards designing an AI agent that makes socially optimal choices.
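The hierarchical inference described above can be sketched as follows, with the group norm modeled as a Gaussian prior over individual weight vectors and individual estimates obtained by importance sampling. The Gaussian prior, the sampling scheme, and the empirical-Bayes update of the group norm are simplifying assumptions made for illustration; they stand in for, rather than reproduce, the paper's hierarchical Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(0)
DIMS = 4  # number of abstract moral dimensions

def choice_loglik(weights, dilemmas, choices, beta=1.0):
    """Log-likelihood of observed binary choices under soft-max utility.

    dilemmas -- array (n, DIMS): feature difference between the two sides (A minus B)
    choices  -- boolean array (n,): True where side A was chosen
    """
    logits = beta * dilemmas @ weights
    log_p_a = -np.log1p(np.exp(-logits))   # log sigmoid(logits)
    log_p_b = -np.log1p(np.exp(logits))    # log (1 - sigmoid(logits))
    return np.where(choices, log_p_a, log_p_b).sum()

def infer_individual(dilemmas, choices, group_mean, group_std, n_samples=5000):
    """Posterior-mean estimate of one individual's moral principles.

    The group norm is modeled as a Gaussian prior (group_mean, group_std) over
    individual weight vectors; prior samples are reweighted by the likelihood of
    the individual's observed choices (self-normalized importance sampling).
    """
    samples = rng.normal(group_mean, group_std, size=(n_samples, DIMS))
    log_w = np.array([choice_loglik(w, dilemmas, choices) for w in samples])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return (w[:, None] * samples).sum(axis=0)

def update_group_norm(individual_estimates):
    """Crude empirical-Bayes update: re-estimate the group norm (prior mean and
    spread) from the per-individual posterior means."""
    est = np.stack(individual_estimates)
    return est.mean(axis=0), est.std(axis=0)

# Example: recover an individual's weights from 20 observed binary choices.
true_w = np.array([1.0, 0.2, 0.5, 0.3])
dilemmas = rng.normal(size=(20, DIMS))            # random feature differences
choices = rng.random(20) < 1.0 / (1.0 + np.exp(-dilemmas @ true_w))
print(infer_individual(dilemmas, choices, np.zeros(DIMS), np.ones(DIMS)))
```

The appeal of this structure is that observations from every individual inform the shared prior, and the prior in turn regularizes estimates for individuals with very few observations, which is how sparse, noisy judgment data can still yield useful per-person and group-level estimates.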


Similar resources

Computational modeling of dynamic decision making using connectionist networks

In this research, a connectionist model of decision making is presented. Important areas for decision making in the brain are the thalamus, the prefrontal cortex, and the amygdala. A connectionist model with three parts representing these three areas is built based on results from the Iowa Gambling Task, which is widely used to study emotional decision making. In these kinds of decisio...


A Cognitive Model of Recognition-Based Moral Decision Making (Ph.D. dissertation, Northwestern University, Field of Computer Science)

Morteza Dehghani. The study of decision making has been dominated by economic perspectives, which model people as rational agents who carefully weigh costs and benefits and try to maximize the utility of every choice, without consideration of issues such as cultural norms, religious beliefs and moral rules. However, psychological findi...


An Integrated Reasoning Approach to Moral Decision-Making

We present a computational model, MoralDM, which integrates several AI techniques in order to model recent psychological findings on moral decision-making. Current theories of moral decision-making extend beyond pure utilitarian models by relying on contextual factors that vary with culture. MoralDM uses a natural language system to produce formal representations from psychological stimuli, to ...




Journal:
  • CoRR

Volume: abs/1801.04346

Pages: -

Publication year: 2018